Graph neural networks (GNNs) have been shown to be highly sensitive to the choice of aggregation function. While summing over a node's neighbours can approximate any permutation-invariant function over discrete inputs, Cohen-Karlik et al. [2020] proved there are set-aggregation problems for which summing cannot generalise to unbounded inputs, proposing recurrent neural networks regularised towards permutation-invariance as a more expressive aggregator. We show that these results carry over to the graph domain: GNNs equipped with recurrent aggregators are competitive with state-of-the-art permutation-invariant aggregators, on both synthetic benchmarks and real-world problems. However, despite the benefits of recurrent aggregators, their $O(V)$ depth makes them both difficult to parallelise and difficult to train on large graphs. Inspired by the observation that a well-behaved aggregator for a GNN is a commutative monoid over its latent space, we propose a framework for constructing learnable, commutative, associative binary operators. With this framework, we construct an aggregator of $O(\log V)$ depth, yielding exponential improvements in both parallelism and dependency length while achieving performance competitive with recurrent aggregators. Based on our empirical observations, our proposed learnable commutative monoid (LCM) aggregator represents a favourable tradeoff between efficient and expressive aggregators.
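To make the $O(\log V)$ construction concrete, here is a minimal sketch of aggregation by balanced-tree reduction with a learned binary operator. The `BinaryOp` MLP and the identity element are illustrative stand-ins: in the actual framework the operator would additionally be constrained or regularised towards commutativity and associativity, which is omitted here.

```python
import torch
import torch.nn as nn

class BinaryOp(nn.Module):
    """Hypothetical learnable binary operator on latent vectors.

    A plain MLP on concatenated pairs, for illustration only; the
    paper's framework would further push this operator towards being
    commutative and associative."""
    def __init__(self, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.mlp(torch.cat([a, b], dim=-1))

def tree_aggregate(op: BinaryOp, xs: torch.Tensor, identity: torch.Tensor) -> torch.Tensor:
    """Reduce n latent vectors of shape (n, d) with op in O(log n) depth.

    Pads odd-sized levels with an identity element, then combines
    adjacent pairs in parallel, halving the set each round."""
    while xs.shape[0] > 1:
        if xs.shape[0] % 2 == 1:  # pad to even size with the identity
            xs = torch.cat([xs, identity.unsqueeze(0)], dim=0)
        xs = op(xs[0::2], xs[1::2])  # one round: n -> n/2
    return xs[0]
```

Compared with a recurrent aggregator, which must consume the $n$ neighbours sequentially, every round here is a single batched call, so both the dependency chain and the wall-clock depth shrink from $O(n)$ to $O(\log n)$.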
Searching for a path between two nodes in a graph is one of the most well-studied and fundamental problems in computer science. In numerous domains such as robotics, AI, or biology, practitioners develop search heuristics to accelerate their pathfinding algorithms. However, it is a laborious and complex process to hand-design heuristics based on the problem and the structure of a given use case. Here we present PHIL (Path Heuristic with Imitation Learning), a novel neural architecture and a training algorithm for discovering graph search and navigation heuristics from data by leveraging recent advances in imitation learning and graph representation learning. At training time, we aggregate datasets of search trajectories and ground-truth shortest path distances, which we use to train a specialized graph neural network-based heuristic function using backpropagation through steps of the pathfinding process. Our heuristic function learns graph embeddings useful for inferring node distances, runs in constant time independent of graph size, and can be easily incorporated into an algorithm such as A* at test time. Experiments show that PHIL reduces the number of explored nodes compared to state-of-the-art methods on benchmark datasets by 58.5\% on average, can be directly applied in diverse graphs ranging from biological networks to road networks, and allows for fast planning in time-critical robotics domains.
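Since the learned heuristic plugs into A* unchanged, a standard A* skeleton suffices to illustrate test-time use. A sketch, assuming `heuristic(node, goal)` wraps PHIL's GNN-based distance estimator and `graph[u]` maps each neighbour of `u` to its edge cost (both interfaces hypothetical):

```python
import heapq
from itertools import count

def a_star(graph, start, goal, heuristic):
    """A* search whose heuristic may be a learned estimator, e.g. a
    GNN-based distance predictor as in PHIL (hypothetical interface)."""
    tie = count()  # tie-breaker so the heap never compares nodes directly
    frontier = [(heuristic(start, goal), next(tie), 0.0, start, None)]
    g_cost, parents = {start: 0.0}, {}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in parents:
            continue  # already expanded via a cheaper route
        parents[node] = parent
        if node == goal:  # walk the parent chain to recover the path
            path = [node]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        for nbr, w in graph[node].items():
            new_g = g + w
            if new_g < g_cost.get(nbr, float("inf")):
                g_cost[nbr] = new_g
                f = new_g + heuristic(nbr, goal)
                heapq.heappush(frontier, (f, next(tie), new_g, nbr, node))
    return None  # goal unreachable
```

A better-informed heuristic shrinks the set of nodes popped from the frontier, which is exactly the "explored nodes" metric the abstract reports.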
Neural algorithmic reasoning studies the problem of learning algorithms with neural networks, especially with graph architectures. A recent proposal, XLVIN, reaps the benefits of using a graph neural network that simulates the value iteration algorithm in deep reinforcement learning agents. It allows model-free planning without access to privileged information about the environment, which is usually unavailable. However, XLVIN only supports discrete action spaces, and hence cannot be trivially applied to most tasks of real-world interest. We expand XLVIN to continuous action spaces by discretization, and evaluate several selective expansion policies to deal with the resulting large planning graphs. Our proposal, CNAP, demonstrates how neural algorithmic reasoning can make a measurable impact in higher-dimensional continuous control settings, such as MuJoCo, bringing gains in low-data settings and outperforming model-free baselines.
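A minimal sketch of the discretisation step described above, assuming a box-shaped action space; the number of discrete actions grows exponentially with the action dimensionality, which is what motivates the selective expansion policies:

```python
import numpy as np
from itertools import product

def discretise_actions(low, high, bins_per_dim):
    """Turn a continuous box action space into a finite action set by
    taking `bins_per_dim` evenly spaced values per dimension and forming
    their Cartesian product. The set size is bins_per_dim ** d, so the
    planning graph quickly becomes too large to expand exhaustively."""
    axes = [np.linspace(l, h, bins_per_dim) for l, h in zip(low, high)]
    return np.array(list(product(*axes)))

# e.g. a 3-dimensional control space with 5 bins per dimension
actions = discretise_actions([-1.0, -1.0, -1.0], [1.0, 1.0, 1.0], 5)
print(actions.shape)  # (125, 3)
```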
Deploying graph neural networks (GNNs) on whole-graph classification or regression tasks is known to be challenging: it often requires computing node features that are mindful of both local interactions in their neighbourhood and the global context of the graph structure. GNN architectures that navigate this space need to avoid pathological behaviours, such as bottlenecks and oversquashing, while ideally having linear time and space complexity requirements. In this work, we propose an elegant approach based on propagating information over expander graphs. We leverage an efficient method for constructing expander graphs of a given size, and use this insight to propose the EGP model. We show that EGP is able to address all of the above concerns, while requiring minimal effort to set up, and provide evidence of its empirical utility on relevant graph classification datasets and baselines in the Open Graph Benchmark. Importantly, using expander graphs as a template for message passing necessarily gives rise to negative curvature. While this appears to be counterintuitive in light of recent related work on oversquashing, we theoretically demonstrate that negatively curved edges are likely to be required to obtain scalable message passing without bottlenecks. To the best of our knowledge, this is a previously unstudied result in the context of graph representation learning, and we believe our analysis paves the way to a novel class of scalable methods to counter oversquashing in GNNs.
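As an illustration of how such a template could be instantiated, below is a sketch that builds a classical expander family: the 4-regular Cayley graphs of SL(2, Z_n) with standard generators. Whether this matches EGP's exact construction is an assumption; it is meant only to show that an expander of a desired size can be generated cheaply and independently of the input graph.

```python
import numpy as np

def cayley_expander(n: int):
    """Cayley graph of SL(2, Z_n) with generators [[1,1],[0,1]],
    [[1,0],[1,1]] and their inverses, a classical 4-regular expander
    family. Illustrative only; details may differ from EGP's setup."""
    gens = [np.array(g) for g in ([[1, 1], [0, 1]], [[1, 0], [1, 1]],
                                  [[1, -1], [0, 1]], [[1, 0], [-1, 1]])]
    def key(m):
        return tuple((m % n).flatten())
    idx, edges = {key(np.eye(2, dtype=int)): 0}, []
    frontier = [np.eye(2, dtype=int)]
    while frontier:  # traverse the group from the identity
        m = frontier.pop()
        for g in gens:
            nxt = (m @ g) % n
            if key(nxt) not in idx:
                idx[key(nxt)] = len(idx)
                frontier.append(nxt)
            edges.append((idx[key(m)], idx[key(nxt)]))
    return len(idx), edges  # |SL(2, Z_n)| nodes and directed edges

num_nodes, edges = cayley_expander(5)  # |SL(2, Z_5)| = 120
print(num_nodes, len(edges))  # 120 nodes, 480 directed edges
```

Message passing would then interleave layers over the input graph with layers over this fixed-degree template, giving short mixing paths between all node pairs at linear cost.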
A cornerstone of neural algorithmic reasoning is the ability to solve algorithmic tasks, especially in a manner that generalises out of distribution. While recent years have seen a surge of methodological improvements in this area, they have mostly focused on building specialist models: models capable of learning to execute only one algorithm, or a collection of algorithms sharing the same control-flow backbone. Here, instead, we focus on constructing a generalist neural algorithmic learner: a single graph neural network processor capable of learning to execute a wide variety of algorithms, such as sorting, searching, dynamic programming, pathfinding and geometry. We leverage the CLRS benchmark to empirically show that, much like recent successes in the perception domain, generalist algorithmic learners can be built by "incorporating" knowledge. That is, algorithms can be learned effectively in a multi-task manner, so long as we can learn to execute them well in a single-task regime. Motivated by this, we present a series of improvements to the input representation, training regime and processor architecture of CLRS, improving average single-task performance by over 20%. We then conduct a thorough ablation of multi-task learners leveraging these improvements. Our results demonstrate that a generalist learner effectively incorporates the knowledge captured by specialist models.
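A minimal sketch of the multi-task regime described above, with a single shared processor network and hypothetical per-task interfaces (`sample_batch`, `loss`) standing in for CLRS's task-specific encoders and decoders:

```python
import random

def train_generalist(processor, tasks, optimiser, num_steps):
    """Hypothetical multi-task loop: one shared GNN processor trained on
    batches sampled across algorithmic tasks. `sample_batch` and `loss`
    are illustrative stand-ins, not CLRS APIs."""
    for _ in range(num_steps):
        task = random.choice(tasks)          # e.g. sorting, BFS, DP
        batch = task.sample_batch()          # task-specific inputs and hints
        loss = task.loss(processor, batch)   # encode -> process -> decode
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()                     # updates the shared processor
```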
Graph neural networks (GNNs) have emerged as a powerful technique for learning on relational data. Owing to the relatively limited number of message-passing steps they perform, and hence a smaller receptive field, there has been significant interest in improving their expressivity by incorporating structural aspects of the underlying graph. In this paper, we explore the use of affinity measures as features in graph neural networks, in particular measures arising from random walks, including effective resistance, hitting times and commute times. We propose message-passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks. Our architecture has low computational complexity, and our features are invariant to the permutations of the underlying graph. The measures we compute allow the network to exploit the connectivity properties of the graph, enabling us to outperform relevant benchmarks on a wide variety of tasks, often with far fewer message-passing steps. On OGB-LSC-PCQM4Mv1, one of the largest public graph regression datasets, we obtain the best known single-model validation MAE at the time of writing.
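For intuition, the random-walk affinities above all derive from the graph Laplacian: effective resistance comes from its Moore-Penrose pseudoinverse, and commute time is proportional to it. A dense illustrative computation (not the paper's scalable pipeline):

```python
import numpy as np

def effective_resistance(adj: np.ndarray) -> np.ndarray:
    """Pairwise effective resistances of a connected graph from the
    pseudoinverse of its Laplacian:
        R_uv = L+_uu + L+_vv - 2 L+_uv,
    with commute time C_uv = 2m * R_uv for m edges. Dense O(n^3)
    version, for illustration on small graphs only."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    L_pinv = np.linalg.pinv(laplacian)
    d = np.diag(L_pinv)
    return d[:, None] + d[None, :] - 2 * L_pinv

# 4-cycle: opposite nodes see two parallel 2-ohm paths, so R = 1
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
print(effective_resistance(adj)[0, 2])  # ~1.0
```

Such features are permutation-invariant by construction, since the Laplacian spectrum does not depend on node ordering.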
A sheaf neural network (SNN) is a graph neural network (GNN) that operates on a sheaf, an object that equips a graph with vector spaces over its nodes and edges, together with linear maps between these spaces. SNNs have been shown to have useful theoretical properties that help tackle issues arising from heterophily and over-smoothing. One complication intrinsic to these models is finding a good sheaf for the task. Previous works proposed two diametrically opposed approaches: manually constructing the sheaf based on domain knowledge, and learning the sheaf end-to-end with gradient-based methods. However, domain knowledge is often insufficient, while learning the sheaf can lead to overfitting and significant computational overhead. In this work, we propose a novel way of computing sheaves that draws inspiration from Riemannian geometry: we leverage the manifold assumption to compute manifold-and-graph-aware orthogonal maps that optimally align the tangent spaces of neighbouring data points. We show that this approach incurs less computational overhead than previous SNN models. Overall, this work provides an interesting connection between algebraic topology and differential geometry, and we hope it sparks future research in this direction.
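A sketch of the geometric construction the abstract alludes to, under the manifold assumption: estimate tangent spaces by local PCA, then take the closest orthogonal map between neighbouring bases (the orthogonal Procrustes solution, as in vector-diffusion-style constructions). The paper's exact recipe for the sheaf may differ in details:

```python
import numpy as np

def tangent_basis(x, neighbours, d):
    """Estimate a d-dimensional tangent basis at point x via local PCA
    over its graph neighbours (the manifold assumption)."""
    diffs = neighbours - x  # (k, D) centred neighbourhood
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:d].T  # (D, d); columns span the estimated tangent space

def restriction_map(T_u, T_v):
    """Closest d x d orthogonal map between the tangent coordinates at
    u and v: the orthogonal Procrustes solution for T_u^T T_v."""
    u, _, vt = np.linalg.svd(T_u.T @ T_v)
    return u @ vt  # orthogonal, as a product of orthogonal factors
```

Because these maps are computed in closed form rather than learned, no gradients flow through the sheaf, which is where the reduced overhead relative to end-to-end sheaf learning comes from.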
The automated segmentation of cortical areas has been a long-standing challenge in medical image analysis. The complex geometry of the cortex is commonly represented as a polygonal mesh, whose segmentation can be addressed by graph-based learning methods. Current methods produce notably worse segmentation results when the cortical meshes are misaligned across subjects, limiting their ability to handle multi-domain data. In this paper, we investigate the utility of E(n)-equivariant graph neural networks (EGNNs), comparing their performance against plain graph neural networks (GNNs). Our evaluation shows that GNNs outperform EGNNs on aligned meshes, owing to their ability to exploit the presence of a global coordinate system. On misaligned meshes, the performance of plain GNNs drops substantially, while E(n)-equivariant message passing maintains the same segmentation results. The best results can also be obtained by using plain GNNs on realigned data (meshes co-registered in a global coordinate system).
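For reference, a minimal E(n)-equivariant message-passing layer in the style of Satorras et al. (2021), of the kind the comparison above presumably uses: messages depend only on invariant quantities (squared distances), and coordinates are updated along relative positions, so rotating or translating the mesh transforms the output accordingly.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """Minimal E(n)-equivariant layer: invariant messages, equivariant
    coordinate updates. A sketch, not the paper's exact architecture."""
    def __init__(self, dim: int):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim + 1, dim), nn.SiLU())
        self.upd = nn.Sequential(nn.Linear(2 * dim, dim), nn.SiLU())
        self.coord = nn.Linear(dim, 1, bias=False)

    def forward(self, h, x, edge_index):
        src, dst = edge_index                      # (2, E) node indices
        rel = x[src] - x[dst]                      # relative positions
        dist2 = (rel ** 2).sum(-1, keepdim=True)   # E(n)-invariant input
        m = self.msg(torch.cat([h[src], h[dst], dist2], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)  # sum per node
        x_agg = torch.zeros_like(x).index_add_(0, dst, rel * self.coord(m))
        return self.upd(torch.cat([h, agg], dim=-1)), x + x_agg
```

A plain GNN would instead feed raw coordinates into `msg`, which is exactly what lets it exploit a shared global coordinate system on aligned meshes, and what breaks once the meshes are misaligned.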
Neural networks leverage robust internal representations in order to generalise. Learning them is difficult, and often requires a large training set that covers the data distribution densely. We study a common setting where our task is not purely opaque. Indeed, very often we may have access to information about the underlying system (e.g. that observations must obey certain laws of physics) that any "tabula rasa" neural network would need to re-learn from scratch, penalising performance. We incorporate this information into a pre-trained reasoning module, and investigate its role in shaping the discovered representations in diverse self-supervised learning settings from pixels. Our approach paves the way for a new class of representation learning, grounded in algorithmic priors.
Combinatorial optimisation is a well-established area of operations research and computer science. Until recently, its methods focused on solving problem instances in isolation, ignoring that they often stem from related data distributions in practice. However, recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial tasks, either directly as solvers or by enhancing exact solvers. The inductive bias of GNNs effectively encodes combinatorial and relational inputs, thanks to their invariance to permutations and their awareness of input sparsity. This paper presents a conceptual review of recent key advancements in this emerging field, aimed at both optimisation and machine learning researchers.